





A Proof of Proposition 1: We first follow the proof of the log-sum inequality to prove the following inequality: q_u(y|D_r) log q_u(y|D …
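For context, the log-sum inequality that this truncated proof excerpt invokes can be stated as follows (this is the standard result, not a reconstruction of the paper's Proposition 1):

```latex
% Log-sum inequality: for nonnegative a_1,\dots,a_n and b_1,\dots,b_n,
\sum_{i=1}^{n} a_i \log\frac{a_i}{b_i}
\;\ge\;
\Big(\sum_{i=1}^{n} a_i\Big)\log\frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i},
% with equality iff a_i / b_i is constant in i.
```

It follows from Jensen's inequality applied to the convex function f(t) = t log t, which matches the convex f defined in the excerpts further down this page.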

Neural Information Processing Systems

Define the function f(t) ≜ t log t, which is convex. … This section discusses the sparse GP model used in the classification of the synthetic moon dataset in Sec. … A GP is fully specified by its prior mean (i.e., assumed to be … Given the latent function values (i.e., also known as inducing variables) … On the other hand, Figs. 9 and 10 visualize the approximate posterior beliefs … Let us consider the experiment in Sec. … Figure 1 shows the results of averaged KL divergences (i.e., the performance metric described in Sec. 4) achieved by EUBO, rKL, and … However, the fourth row of Table 3 shows that both EUBO and rKL do not perform that well. EUBO may suffer from poor unlearning performance when λ is too small. One may wonder how our unlearning methods can handle multiple users' requests arriving sequentially … Figure 12: Graphs of averaged KL divergence vs. …
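The excerpts above evaluate unlearning quality by an averaged KL divergence between the unlearned posterior and the posterior from retraining. As a minimal sketch (not the paper's code), here is how such a metric can be computed in closed form when each posterior is approximated by a univariate Gaussian; `kl_gauss` and `averaged_kl` are hypothetical helper names:

```python
import math

def kl_gauss(mu_q, var_q, mu_p, var_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for univariate Gaussians
    return 0.5 * (math.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p
                  - 1.0)

def averaged_kl(unlearned, retrained):
    # Average KL over matched (mean, variance) posterior pairs,
    # e.g. one pair per test input or per trial
    return sum(kl_gauss(*q, *p) for q, p in zip(unlearned, retrained)) / len(unlearned)

# Identical posteriors give zero divergence; mismatched means give a positive value
print(averaged_kl([(0.0, 1.0)], [(0.0, 1.0)]))  # → 0.0
```

A lower averaged KL indicates that the directly unlearned posterior is closer to the gold-standard retrained posterior.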



Variational Bayesian Unlearning

Nguyen, Quoc Phong, Low, Bryan Kian Hsiang, Jaillet, Patrick

arXiv.org Machine Learning

This paper studies the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased. We frame this problem as one of minimizing the Kullback-Leibler divergence between the approximate posterior belief of model parameters after directly unlearning from erased data vs. the exact posterior belief from retraining with remaining data. Using the variational inference (VI) framework, we show that it is equivalent to minimizing an evidence upper bound which trades off between fully unlearning from erased data vs. not entirely forgetting the posterior belief given the full data (i.e., including the remaining data); the latter prevents catastrophic unlearning that can render the model useless. In model training with VI, only an approximate (instead of exact) posterior belief given the full data can be obtained, which makes unlearning even more challenging. We propose two novel tricks to tackle this challenge. We empirically demonstrate our unlearning methods on Bayesian models such as sparse Gaussian process and logistic regression using synthetic and real-world datasets.
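To build intuition for why unlearning from erased data can, in principle, match retraining on the remaining data, consider a conjugate model where Bayes' rule can be inverted exactly. The sketch below (an illustration under a Beta-Bernoulli assumption, not the paper's method, which targets non-conjugate models with approximate posteriors) shows that dividing the erased-data likelihood out of the full-data posterior recovers the retrained posterior exactly:

```python
from dataclasses import dataclass

@dataclass
class Beta:
    a: float  # pseudo-count of successes
    b: float  # pseudo-count of failures

def update(prior, data):
    # Bayes' rule for a Bernoulli likelihood: add observed counts
    s = sum(data)
    return Beta(prior.a + s, prior.b + len(data) - s)

def unlearn(posterior, erased):
    # Exact unlearning divides out the erased-data likelihood,
    # i.e. subtracts the erased counts from the posterior parameters
    s = sum(erased)
    return Beta(posterior.a - s, posterior.b - (len(erased) - s))

prior = Beta(1.0, 1.0)
full = [1, 1, 0, 1, 0, 0, 1]
erased, remaining = full[:3], full[3:]   # D_e to erase, D_r to keep

post_unlearned = unlearn(update(prior, full), erased)
post_retrained = update(prior, remaining)
print(post_unlearned, post_retrained)    # identical Beta parameters
```

Outside conjugate families only an approximate posterior of the full data is available, which is exactly the difficulty the paper's EUBO and reverse-KL (rKL) tricks address.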